ECE 901 Lecture 4: Estimation of Lipschitz smooth functions

Author

  • R. Nowak
Abstract

Consider the following setting. Let Y = f∗(X) + W, where X is a random variable (r.v.) taking values in [0, 1], and W is a r.v. taking values in R, independent of X and satisfying E[W] = 0 and E[W²] = σ² < ∞. Finally, let f∗ : [0, 1] → R be a function satisfying

|f∗(t) − f∗(s)| ≤ L|t − s|, ∀ t, s ∈ [0, 1],   (1)

where L > 0 is a constant. A function satisfying condition (1) is said to be Lipschitz on [0, 1]. Notice that such a function must be continuous, but it is not necessarily differentiable. An example of such a function is depicted in Figure (a).
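The excerpt specifies only the observation model, not an estimator. The Python sketch below simulates Y = f∗(X) + W for one particular Lipschitz choice, f∗(t) = |t − 1/2| with L = 1, and fits a piecewise-constant (binned-average) estimate on roughly n^(1/3) equal bins; the estimator, the noise level σ, and the bin count are illustrative assumptions and are not taken from the lecture text.

import numpy as np

# Sketch of the observation model from the abstract: Y = f*(X) + W with
# X on [0, 1], E[W] = 0, E[W^2] = sigma^2, and f* Lipschitz with constant L.
# The piecewise-constant estimator below is an assumed, illustrative choice.

rng = np.random.default_rng(0)
L, sigma, n = 1.0, 0.1, 2000

def f_star(t):
    # Lipschitz on [0, 1] with L = 1; continuous but not differentiable at 1/2.
    return np.abs(t - 0.5)

X = rng.uniform(0.0, 1.0, size=n)
W = rng.normal(0.0, sigma, size=n)          # zero-mean noise, variance sigma^2
Y = f_star(X) + W

# Piecewise-constant estimate: average the Y's falling in each of m equal bins.
# m ~ n^(1/3) is a heuristic balancing (L/m)^2 bias against m*sigma^2/n variance.
m = int(round(n ** (1.0 / 3.0)))
edges = np.linspace(0.0, 1.0, m + 1)
bin_idx = np.clip(np.digitize(X, edges) - 1, 0, m - 1)
f_hat = np.array([Y[bin_idx == j].mean() if np.any(bin_idx == j) else 0.0
                  for j in range(m)])

# Empirical mean-squared error of the fit on a fine grid.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
t_bins = np.clip(np.digitize(t, edges) - 1, 0, m - 1)
mse = np.mean((f_hat[t_bins] - f_star(t)) ** 2)
print(f"bins m = {m}, empirical MSE = {mse:.4f}")

The bin count here trades the squared approximation error of a constant fit on an interval of width 1/m, at most (L/m)², against an averaging variance of roughly mσ²/n per bin; this balance is stated only as a heuristic for the sketch.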


Similar articles

ECE 901 Lecture 15: Denoising Smooth Functions with Unknown Smoothness

Lipschitz functions are interesting, but they can be very rough (they can have many kinks). In many situations the functions are much smoother; this is how you would model the temperature inside a museum room, for example. Often we do not know how smooth the function is, so an interesting question is whether we can adapt to the unknown smoothness. In this lecture we will use the Maximum Complexit...


ELEN6887 Lecture 15: Denoising Smooth Functions with Unknown Smoothness

Lipschitz functions are interesting, but they can be very rough (they can have many kinks). In many situations the functions are much smoother; this is how you would model the temperature inside a museum room, for example. Often we do not know how smooth the function is, so an interesting question is whether we can adapt to the unknown smoothness. In this lecture we will use the Maximum Complexit...


ELEN6887 Lecture 14: Denoising Smooth Functions with Unknown Smoothness

Lipschitz functions are interesting, but they can be very rough (they can have many kinks). In many situations the functions are much smoother; this is how you would model the temperature inside a museum room, for example. Often we do not know how smooth the function is, so an interesting question is whether we can adapt to the unknown smoothness. In this lecture we will use the Maximum Complexit...


ECE 901 Lecture 14: Maximum Likelihood Estimation and Complexity Regularization

Let Yi, i = 1, . . . , n, be i.i.d. ∼ pθ∗, where θ∗ ∈ Θ. We can view pθ∗ as a member of a parametric class of distributions, P = {pθ}θ∈Θ. Our goal is to use the observations {Yi} to select an appropriate distribution (e.g., model) from P. We would like the selected distribution to be close to pθ∗ in some sense. We use the negative log-likelihood loss function, defined as l(θ, Yi) = − log pθ(Yi). The e...
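The loss l(θ, Yi) = − log pθ(Yi) can be made concrete with a toy family. The Python sketch below assumes a unit-variance Gaussian class pθ = N(θ, 1) and a finite grid standing in for Θ; both choices are illustrative assumptions, since the excerpt does not fix a parametric family or an optimization method.

import numpy as np

# Minimal MLE sketch: Y_1,...,Y_n i.i.d. ~ p_{theta*}, and we pick the theta
# minimizing the empirical negative log-likelihood sum of l(theta, Y_i).
# The Gaussian family and the grid search over Theta are assumed for illustration.

rng = np.random.default_rng(0)
theta_star = 1.5
Y = rng.normal(loc=theta_star, scale=1.0, size=500)   # p_theta = N(theta, 1)

def neg_log_likelihood(theta, y):
    # Sum over the sample of -log p_theta(y_i) for the unit-variance Gaussian family.
    return np.sum(0.5 * np.log(2.0 * np.pi) + 0.5 * (y - theta) ** 2)

Theta = np.linspace(-5.0, 5.0, 2001)                  # finite grid standing in for Theta
losses = np.array([neg_log_likelihood(th, Y) for th in Theta])
theta_hat = Theta[np.argmin(losses)]

print(f"theta* = {theta_star}, grid MLE = {theta_hat:.3f}")
# For this family the MLE is the sample mean, so theta_hat is close to Y.mean().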


Quasi-Gap and Gap Functions for Non-Smooth Multi-Objective Semi-Infinite Optimization Problems

In this paper, we introduce and study some new single-valued gap functions for non-differentiable semi-infinite multiobjective optimization problems with locally Lipschitz data. Since one of the fundamental properties of a gap function for an optimization problem is its ability to characterize the solutions of the problem in question, the essential properties of the newly introduced ...

Publication date: 2009